Imperial College London
Multitask Learning with Learned Task Relationships
Classical consensus-based strategies for federated and decentralized learning are statistically suboptimal in the presence of heterogeneous local data or task distributions. As a result, in recent years, there has been growing interest in multitask or personalized strategies, which allow individual agents to benefit from one another in pursuing locally optimal models without enforcing consensus. Existing strategies either require precise prior knowledge of the underlying task relationships or are fully non-parametric, relying instead on meta-learning or proximal constructions. In this work, we introduce an algorithmic framework that strikes a balance between these extremes. By modeling task relationships through a Gaussian Markov Random Field with an unknown precision matrix, we develop a strategy that jointly learns both the task relationships and the local models, allowing agents to self-organize in a way consistent with their individual data distributions. Our theoretical analysis quantifies the quality of the learned relationship, and our numerical experiments demonstrate its practical effectiveness.
- Europe > United Kingdom > England > Greater London > London (0.40)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
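The interplay the abstract describes — jointly learning local models and a task-relationship precision matrix — can be sketched as a toy alternating scheme. Everything below (the least-squares tasks, step sizes, and the regularized-inverse update for the precision matrix) is an illustrative assumption, not the paper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K agents, each with its own linear regression task.
K, d, n = 4, 3, 50
true_W = rng.normal(size=(K, d))
X = [rng.normal(size=(n, d)) for _ in range(K)]
y = [X[k] @ true_W[k] + 0.1 * rng.normal(size=n) for k in range(K)]

W = np.zeros((K, d))   # local models, one row per agent
P = np.eye(K)          # GMRF precision matrix over tasks (unknown, learned)
lr, rho = 0.05, 0.1    # step size and coupling strength (assumed values)

for _ in range(200):
    # Model step: local gradient plus GMRF coupling from (rho/2) tr(W^T P W).
    for k in range(K):
        grad_local = X[k].T @ (X[k] @ W[k] - y[k]) / n
        grad_couple = rho * (P[k] @ W)   # row k of P W
        W[k] -= lr * (grad_local + grad_couple)
    # Relationship step: refit the precision to the current models
    # (regularized inverse of a task covariance; a stand-in for the
    # paper's actual update).
    S = W @ W.T / d + np.eye(K)
    P = np.linalg.inv(S)
```

With a weak coupling (`rho = 0.1`), each agent's model stays close to its own least-squares solution while the learned `P` encodes how strongly tasks pull on one another.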
Advancing Brainwave Modeling with a Codebook-Based Foundation Model
Barmpas, Konstantinos, Lee, Na, Panagakis, Yannis, Adamos, Dimitrios A., Laskaris, Nikolaos, Zafeiriou, Stefanos
Recent advances in large-scale pre-trained Electroencephalogram (EEG) models have shown great promise, driving progress in Brain-Computer Interfaces (BCIs) and healthcare applications. However, despite their success, many existing pre-trained models have struggled to fully capture the rich information content of neural oscillations, a limitation that fundamentally constrains their performance and generalizability across diverse BCI tasks. This limitation is frequently rooted in suboptimal architectural design choices which constrain their representational capacity. In this work, we introduce LaBraM++, an enhanced Large Brainwave Foundation Model (LBM) that incorporates principled improvements grounded in robust signal processing foundations. LaBraM++ demonstrates substantial gains across a variety of tasks, consistently outperforming the original architecture on which it is based and achieving competitive results when compared to other open-source LBMs. Its superior performance and training efficiency highlight its potential as a strong foundation for future advancements in LBMs.
- North America > United States (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Greece > Central Macedonia > Thessaloniki (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
- Information Technology > Artificial Intelligence > Vision (0.68)
- Information Technology > Artificial Intelligence > Cognitive Science (0.68)
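The "codebook" idea behind such models can be illustrated with a minimal vector-quantization step that maps continuous EEG patch embeddings to discrete tokens. The shapes and the nearest-neighbour rule below are illustrative assumptions, not LaBraM++'s actual tokenizer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical shapes: a batch of EEG patch embeddings and a learned codebook.
batch, dim, codes = 32, 16, 64
z = rng.normal(size=(batch, dim))          # encoder outputs for EEG patches
codebook = rng.normal(size=(codes, dim))   # discrete "brainwave vocabulary"

def quantize(z, codebook):
    """Map each embedding to its nearest codebook entry (L2 distance)."""
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (batch, codes)
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

zq, idx = quantize(z, codebook)
# Training such a tokenizer would add a commitment loss pulling each
# embedding toward its assigned code:
commit_loss = ((z - zq) ** 2).mean()
```

The discrete indices `idx` then play the role of tokens for masked-prediction pre-training, in the same spirit as subword tokens in language models.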
Multi-Agent Reasoning for Cardiovascular Imaging Phenotype Analysis
Zhang, Weitong, Qiao, Mengyun, Zang, Chengqi, Niederer, Steven, Matthews, Paul M, Bai, Wenjia, Kainz, Bernhard
Identifying associations between imaging phenotypes, disease risk factors, and clinical outcomes is essential for understanding disease mechanisms. However, traditional approaches rely on human-driven hypothesis testing and selection of association factors, often overlooking complex, non-linear dependencies among imaging phenotypes and other multi-modal data. To address this, we introduce Multi-agent Exploratory Synergy for the Heart (MESHAgents): a framework that leverages large language models as agents to dynamically elicit, surface, and select confounders and phenotypes in association studies. Specifically, we orchestrate a multi-disciplinary team of AI agents, which spontaneously generate and converge on insights through iterative, self-organizing reasoning. The framework dynamically synthesizes statistical correlations with multi-expert consensus, providing an automated pipeline for phenome-wide association studies (PheWAS). We demonstrate the system's capabilities through a population-based study of imaging phenotypes of the heart and aorta. MESHAgents autonomously uncovered correlations between imaging phenotypes and a wide range of non-imaging factors, identifying additional confounder variables beyond standard demographic factors. Validation on diagnosis tasks reveals that MESHAgents-discovered phenotypes achieve performance comparable to expert-selected phenotypes, with mean AUC differences as small as $-0.004_{\pm0.010}$ on disease classification tasks. Notably, the recall score improves for 6 out of 9 disease types. Our framework provides clinically relevant imaging phenotypes with transparent reasoning, offering a scalable alternative to expert-driven methods.
- Europe > United Kingdom > England > Greater London > London (0.05)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.04)
- Europe > Germany > Bavaria > Middle Franconia > Nuremberg (0.04)
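The iterative, self-organizing loop described above can be caricatured with a toy consensus procedure. The real framework uses LLM agents and statistical evidence; the agent views and the endorsement rule below are purely illustrative:

```python
# Toy stand-in for the MESHAgents loop: each "agent" holds a disciplinary
# view of candidate confounders, and a shared pool grows until no agent
# proposes anything new (consensus reached).
agent_views = [
    {"age", "sex", "bmi"},        # e.g. a demographics agent
    {"bmi", "smoking", "sbp"},    # e.g. a lifestyle agent
    {"sbp", "age", "ldl"},        # e.g. a cardiology agent
]

def endorsed(item, views):
    """Toy consensus rule: keep items at least two agents independently raise."""
    return sum(item in v for v in views) >= 2

pool = set()
while True:
    new_pool = pool | {item for v in agent_views for item in v
                       if endorsed(item, agent_views)}
    if new_pool == pool:   # fixed point: no agent adds anything new
        break
    pool = new_pool
```

In the actual system the endorsement step is an LLM-mediated debate backed by statistical correlation tests, but the fixed-point structure — iterate until the proposal set stabilizes — is the same.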
Dangerous heart conditions detected in seconds with AI stethoscope
The first artificial intelligence (AI) stethoscope has gone beyond listening to a heartbeat. Researchers at Imperial College London and Imperial College Healthcare NHS Trust discovered that an AI stethoscope can detect heart failure at an early stage. The TRICORDER study results, published in BMJ Journals, found that the AI-enabled stethoscope can help doctors identify three heart conditions in just 15 seconds. According to the British Heart Foundation (BHF), which partially funded the study, the researchers analyzed data from more than 1.5 million patients, focusing on people with heart failure symptoms like breathlessness, swelling and fatigue.
- North America > United States > Georgia > Chatham County > Savannah (0.25)
- North America > United States > California (0.05)
- Europe > Spain > Community of Madrid > Madrid (0.05)
Evaluating Differentially Private Generation of Domain-Specific Text
Sun, Yidan, Schlegel, Viktor, Nandakumar, Srinivasan, Zahid, Iqra, Wu, Yuping, Del-Pinto, Warren, Nenadic, Goran, Lam, Siew-Kei, Zhang, Jie, Bharath, Anil A
Generative AI offers transformative potential for high-stakes domains such as healthcare and finance, yet privacy and regulatory barriers hinder the use of real-world data. To address this, differentially private synthetic data generation has emerged as a promising alternative. In this work, we introduce a unified benchmark to systematically evaluate the utility and fidelity of text datasets generated under formal Differential Privacy (DP) guarantees. Our benchmark addresses key challenges in domain-specific benchmarking, including choice of representative data and realistic privacy budgets, accounting for pre-training and a variety of evaluation metrics. We assess state-of-the-art privacy-preserving generation methods across five domain-specific datasets, revealing significant utility and fidelity degradation compared to real data, especially under strict privacy constraints. These findings underscore the limitations of current approaches, outline the need for advanced privacy-preserving data sharing methods and set a precedent regarding their evaluation in realistic scenarios.
- Asia > Singapore > Central Region > Singapore (0.41)
- Asia > South Korea > Seoul > Seoul (0.05)
- Europe > United Kingdom > England > Greater Manchester > Manchester (0.04)
- (4 more...)
- Research Report > Promising Solution (0.46)
- Research Report > New Finding (0.34)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Education > Educational Setting > Higher Education (0.41)
- Information Technology > Data Science > Data Mining (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.69)
- Information Technology > Artificial Intelligence > Natural Language > Generation (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.66)
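The utility degradation under strict privacy budgets that the benchmark measures can be illustrated with the classic Laplace mechanism: a smaller ε forces more noise and worse statistics. This toy release of word counts is a generic DP illustration, not one of the benchmarked generation methods:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy illustration of the privacy/utility trade-off: release word-count
# statistics under the Laplace mechanism and track the absolute error.
true_counts = rng.integers(0, 100, size=50).astype(float)
sensitivity = 1.0  # assume one user changes each count by at most 1

def dp_release(counts, epsilon):
    """Laplace mechanism: noise scale = sensitivity / epsilon."""
    noise = rng.laplace(scale=sensitivity / epsilon, size=counts.shape)
    return counts + noise

errs = {}
for eps in (0.1, 1.0, 10.0):
    errs[eps] = np.abs(dp_release(true_counts, eps) - true_counts).mean()
```

The expected absolute error equals the noise scale, so halving ε roughly doubles the error — the same qualitative degradation the benchmark observes for generated text under strict privacy constraints.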
Doctors develop AI stethoscope that can detect major heart conditions in 15 seconds
Doctors have successfully developed an artificial intelligence-led stethoscope that can detect three heart conditions in 15 seconds. Invented in 1816, the traditional stethoscope – used to listen to sounds within the body – has been a vital part of every medic's toolkit for more than two centuries. Now a team have designed a hi-tech upgrade with AI capabilities that can diagnose heart failure, heart valve disease and abnormal heart rhythms almost instantly. The new stethoscope developed by researchers at Imperial College London and Imperial College healthcare NHS trust can analyse tiny differences in heartbeat and blood flow undetectable to the human ear, and take a rapid ECG at the same time. Details of the breakthrough, which could boost early diagnosis of the three conditions, were presented to thousands of doctors at the European Society of Cardiology annual congress in Madrid, the world's largest heart conference.
- Europe > United Kingdom (0.38)
- Europe > Spain > Community of Madrid > Madrid (0.25)
- North America > United States > California (0.05)
AI could be about to completely change the way we do mathematics
Is an artificial intelligence revolution about to transform mathematics? Some prominent mathematicians think so, thanks to automated tools that can help write proofs suddenly showing impressive leaps in capability, with the potential to change the way maths research is done. Around 100 of the world's top mathematicians gathered at the University of Cambridge in June for a conference on whether computers might help mathematicians resolve long-standing problems over how to check that their proofs are correct. This process, known as formalisation, doesn't necessarily have to involve artificial intelligence, and indeed a similar meeting held at Cambridge in 2017 made no mention of AI. But eight years later, AI has come on by leaps and bounds, most notably with the success of large language models powering tools like ChatGPT.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.26)
- North America > United States > Pennsylvania (0.05)
- Europe > Netherlands > South Holland > Leiden (0.05)
Robot Talk Episode 126 – Why are we building humanoid robots?
Research into humanoid robots is a rapidly advancing field, with companies around the world striving to produce robots that look and act more like us. But what is it about recreating ourselves in robot form that we find so captivating? Why do humanoid robots both enthral and terrify us? And is our obsession with robotic humans just vanity, or could they play valuable roles in our future society? In this special live recording at Imperial College London as part of the Great Exhibition Road Festival, Claire chatted to Ben Russell (Science Museum), Maryam Banitalebi Dehkordi (University of Hertfordshire) and Petar Kormushev (Imperial College London) about humanoid robotics.
- Europe > United Kingdom > England > Hertfordshire (0.30)
- Europe > Italy (0.07)
- Asia > Malaysia (0.07)
Confidence-based Intent Prediction for Teleoperation in Bimanual Robotic Suturing
Hu, Zhaoyang Jacopo, Xu, Haozheng, Kim, Sion, Li, Yanan, Baena, Ferdinando Rodriguez y, Burdet, Etienne
Robotic-assisted procedures offer enhanced precision, but while fully autonomous systems are limited by incomplete task knowledge, difficulties in modelling unstructured environments, and poor generalisation, fully manual teleoperated systems face challenges such as delay, instability, and reduced sensory information. To address these, we developed an interactive control strategy that assists the human operator by predicting their motion plan at both high and low levels. At the high level, a surgeme recognition system employs a Transformer-based real-time gesture classification model to adapt dynamically to the operator's actions, while at the low level, a Confidence-based Intention Assimilation Controller adjusts robot actions based on user intent and shared control paradigms. The system is built around a robotic suturing task, supported by sensors that capture the robot's kinematics and the task dynamics. Experiments with users of varying skill levels demonstrated the effectiveness of the proposed approach, showing statistically significant improvements in task completion time and user satisfaction compared to traditional teleoperation. In traditional teleoperation, the human operator fully controls the robot's movements [1]. Robots like the da Vinci Surgical System are equipped with sensors and models offering valuable local information inaccessible to the human operator, such as during visual occlusions or operations with different sensory modalities. By spanning the spectrum between fully manual teleoperation and full autonomy, shared control leverages the benefits of both, enhancing teleoperation with the robot's sensory data and control [2]. While demonstrated for suturing assistance [3], [4], these methods overlook the impact of positional uncertainty, environmental unknowns, and instrument errors. For example, robotic surgery cameras are frequently occluded by body tissues or parts of the robot [5].
- Europe > United Kingdom (0.05)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Research Report > New Finding (0.47)
- Research Report > Experimental Study (0.47)
- Health & Medicine > Surgery (1.00)
- Health & Medicine > Health Care Technology (1.00)
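The low-level idea of a confidence-weighted blend between operator and robot commands can be sketched as follows. The weighting rule and the numbers are illustrative assumptions, not the paper's actual Confidence-based Intention Assimilation Controller:

```python
import numpy as np

# Minimal sketch of confidence-based shared control: blend the human
# teleoperation command with the robot's predicted motion, weighted by
# how confident the intent recognizer is.
def blend(u_human, u_robot, confidence):
    """confidence in [0, 1]: certainty of the predicted motion plan."""
    c = np.clip(confidence, 0.0, 1.0)
    return (1.0 - c) * u_human + c * u_robot

u_h = np.array([0.2, 0.0, -0.1])    # operator's velocity command
u_r = np.array([0.25, 0.05, -0.1])  # robot's predicted suturing motion
u = blend(u_h, u_r, 0.8)            # high confidence -> mostly assist
```

At zero confidence the operator retains full control; at full confidence the robot's prediction dominates, which is the spectrum between manual teleoperation and autonomy the abstract describes.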
Contrastive Representation Learning Helps Cross-institutional Knowledge Transfer: A Study in Pediatric Ventilation Management
Liu, Yuxuan, Han, Jinpei, Ramnarayan, Padmanabhan, Faisal, A. Aldo
Machine learning has shown promising results in clinical decision support, particularly for complex intensive care settings [Gottesman et al., 2019]. However, developing robust models faces significant challenges: limited data availability, variations in clinical practices across institutions, and restricted data sharing. These constraints often result in models that perform well locally but fail to generalize across different clinical settings [McDermott et al., 2021]. This cross-site generalization problem represents a fundamental challenge in the real-world application of clinical ML, particularly when dealing with longitudinal patient data in Electronic Healthcare Records (EHR). Recent advances in generative AI and large foundation models have demonstrated the power of self-supervised representation learning in capturing transferable features from unlabeled data [Bommasani et al., 2021, Brown, 2020]. This capacity is particularly valuable for EHR applications, where obtaining high-quality labeled data is both costly and resource-intensive. Despite growing interest and successful applications of self-supervised learning to EHR time series data [Rasmy et al., 2021, Tu et al., 2024, Wornow et al., 2023], downstream evaluations have largely been restricted to single-institution settings, where test data, though held out, still originates from the same underlying population as the training data.
- Europe > United Kingdom > England > Greater London > London (0.05)
- Europe > Germany > Bavaria > Upper Franconia > Bayreuth (0.04)
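The contrastive objective underlying such transferable representations can be sketched with an InfoNCE-style loss over two "views" of the same patient trajectory. The shapes, temperature, and NumPy implementation below are illustrative, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(3)

# InfoNCE-style contrastive loss: matched pairs (row i of za with row i of
# zb) should score higher than all mismatched pairs in the batch.
def info_nce(za, zb, tau=0.1):
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / tau                       # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # positives on the diagonal

za = rng.normal(size=(8, 16))
loss_random = info_nce(za, rng.normal(size=(8, 16)))         # unrelated views
loss_aligned = info_nce(za, za + 0.01 * rng.normal(size=(8, 16)))  # matched views
```

Pre-training with such a loss on unlabeled EHR time series yields embeddings in which matched views cluster, which is what makes the learned features transferable across institutions.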